parallel program - definition. What is parallel program

What is parallel program - definition

PROGRAMMING PARADIGM IN WHICH MANY CALCULATIONS OR THE EXECUTION OF PROCESSES ARE CARRIED OUT SIMULTANEOUSLY
Parallel computer; Parallel processor; Parallel computation; Parallel programming; Parallel Programming; Parallel computers; Concurrent language; Concurrent event; Computer Parallelism; Parallel machine; Concurrent (programming); Parallel architecture; Parallel Computing; Parallelisation; Parallelization; Parallelized; Multicomputer; Parallelism (computing); Parellel computing; Superword Level Parallelism; Parallel programming language; Message-driven parallel programming; Parallel computer hardware; Parallel program; Parallel code; Parallel language; Parallel processing (computing); Multiple processing elements; Parallel execution units; History of parallel computing; Parallel hardware; Parallel processing computer
[Image captions from the article: a graphical representation of Amdahl's law (if 90% of a program can be parallelized, the theoretical maximum speedup from parallel computing is 10x, no matter how many processors are used); a graphical representation of Gustafson's law; a Beowulf cluster; the Blue Gene/L and Blue Gene/P massively parallel supercomputers; the Cray-1 vector processor; ILLIAC IV, "the most infamous of supercomputers"; a logical view of a non-uniform memory access (NUMA) architecture, in which processors access their own directory's memory with less latency than the other directory's; a Tesla GPGPU card; Taiwania 3 of Taiwan, a parallel supercomputer used in COVID-19 research.]
Results found: 3331
parallel processor         
A computer with more than one central processing unit, used for parallel processing. (1996-04-23)
Parallel computing         
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time.
parallel computing         
parallel computer         
Parallel study         
RESEARCH MODEL IN WHICH MULTIPLE GROUPS RECEIVE EXPERIMENTAL INTERVENTIONS
Parallel groups study; Parallel groups design
A parallel study is a type of clinical study where two groups of treatments, A and B, are given so that one group receives only A while another group receives only B. Other names for this type of study include "between patient" and "non-crossover".
parallel bars         
APPARATUS USED IN MEN'S ARTISTIC GYMNASTICS
Parallel Bars; Gymnastics parallel bars; P bars; Parallel bars (gymnastics)
Parallel bars consist of a pair of horizontal bars on posts, which are used for doing physical exercises.
N-PLURAL
parallel bars         
APPARATUS USED IN MEN'S ARTISTIC GYMNASTICS
Parallel Bars; Gymnastics parallel bars; P bars; Parallel bars (gymnastics)
¦ plural noun a pair of parallel rails on posts, used in gymnastics.
Soviet parallel cinema         
1980S UNDERGROUND FILM MOVEMENT IN THE SOVIET UNION
Russian Parallel Cinema; Soviet Parallel Cinema
Soviet parallel cinema is a genre of film and an underground cinematic movement that emerged in the Soviet Union from the 1970s onwards. The term parallel cinema (parallel’noe kino) was first associated with samizdat films made outside the official Soviet state system.
Computer program         
[Image captions from the article: a symbolic representation of an ALU; a computer memory map; the DEC VT100 (1978), a widely used computer terminal; switches for manual input on a Data General Nova 3 (mid-1970s); Lovelace's description from Note G; the "Hello, World!" program by Brian Kernighan (1978); a kernel connecting application software to the hardware of a computer; a NOT gate; a computer program written in an imperative language; an artist's depiction of Sacramento State University's Intel 8008 microcomputer (1972); a sample function-level data-flow diagram; Fran Bilas programming the ENIAC by moving cables and setting switches; physical memory scattered across RAM and the hard disk versus virtual memory as one continuous block.]
SEQUENCE OF INSTRUCTIONS WRITTEN IN PROGRAMMING LANGUAGE TO PERFORM A SPECIFIED TASK WITH A COMPUTER
Computer programme; Computer code; Computer programs; Software program; Program (programming); Program (computer science); Program (computing); Computer Program; Software code; Program (computer); Computer Programs; Program file; Computer program code; Program module
A computer program is a sequence or set of instructions in a programming language for a computer to execute. Computer programs are one component of software, which also includes documentation and other intangible components.

Wikipedia

Parallel computing

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.
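
To make the decomposition concrete, here is a minimal sketch in Python (an illustrative example, not from the article): a large summation is divided into independent chunks that worker processes evaluate simultaneously, and the partial results are combined at the end.

    import multiprocessing as mp

    def partial_sum(bounds):
        """Sum the integers in [lo, hi) -- one independent sub-task."""
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n = 10_000_000
        workers = mp.cpu_count()
        step = n // workers
        # Divide the large problem into smaller, similar sub-tasks.
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with mp.Pool(workers) as pool:
            # Solve the sub-tasks at the same time, then combine the results.
            total = sum(pool.map(partial_sum, chunks))
        assert total == n * (n - 1) // 2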

Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism (such as multitasking by time-sharing on a single-core CPU). In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion. In contrast, in concurrent computing, the various processes often do not address related tasks; when they do, as is typical in distributed computing, the separate tasks may have a varied nature and often require some inter-process communication during execution.
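
One way to see the distinction (an illustrative sketch): the asyncio coroutines below are concurrent -- they interleave on a single thread -- yet nothing executes simultaneously, so this is concurrency without parallelism.

    import asyncio

    async def worker(name, delay):
        # Each await yields control, so the two tasks interleave their
        # steps on one thread: concurrency without parallelism.
        for i in range(3):
            await asyncio.sleep(delay)
            print(f"{name}: step {i}")

    async def main():
        await asyncio.gather(worker("A", 0.10), worker("B", 0.15))

    asyncio.run(main())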

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs (massively parallel processors), and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks.

In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting optimal parallel program performance.
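
A classic instance of such a bug (an illustrative sketch): threads incrementing a shared counter can interleave their read-modify-write steps and lose updates; a lock serializes the critical section and restores correctness.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(n, use_lock):
        global counter
        for _ in range(n):
            if use_lock:
                with lock:        # synchronization: one thread at a time
                    counter += 1
            else:
                counter += 1      # unsynchronized read-modify-write: racy

    threads = [threading.Thread(target=increment, args=(100_000, True))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)  # 400000 with the lock; with use_lock=False, often less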

A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law, which states that the speed-up is limited by the fraction of the program's running time that can be parallelized: if only a fraction p of a program can be parallelized, the speed-up can never exceed 1 / (1 - p), no matter how many processors are used.
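
A small numeric sketch of the bound (assuming the usual formulation of Amdahl's law, with parallel fraction p and processor count s; the helper name is made up for illustration):

    def amdahl_speedup(p, s):
        # Amdahl's law: the serial part (1 - p) runs as before, while the
        # parallel part p is spread over s processors.
        return 1.0 / ((1.0 - p) + p / s)

    # With 90% of the program parallelizable, speedup saturates near 10x:
    for s in (2, 8, 64, 1024):
        print(s, round(amdahl_speedup(0.9, s), 2))
    # -> 2 1.82, 8 4.71, 64 8.77, 1024 9.91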